
    Slow feature analysis yields a rich repertoire of complex cell properties

    In this study, we investigate temporal slowness as a learning principle for receptive fields using slow feature analysis (SFA), a new algorithm that determines functions extracting slowly varying signals from input data. We find that the functions learned from image sequences develop many properties also found experimentally in complex cells of primary visual cortex, such as direction selectivity, non-orthogonal inhibition, end-inhibition and side-inhibition. Our results demonstrate that a single unsupervised learning principle can account for such a rich repertoire of receptive field properties.
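    A minimal linear SFA sketch may make the procedure concrete (the study itself works in a quadratic function space, i.e. on inputs expanded into monomials up to degree two; the function and variable names below are illustrative, not the paper's code):

```python
# Minimal linear SFA sketch: whiten the data, then pick the directions in
# which the whitened signal changes most slowly, i.e. the directions with the
# smallest variance of the temporal differences.
import numpy as np

def linear_sfa(x, n_features=2):
    """x: (T, D) time series. Returns the n_features slowest output signals."""
    x = x - x.mean(axis=0)
    # Whitening: decorrelate the input and normalise to unit variance.
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (eigvec / np.sqrt(eigval))
    # Slow directions minimise the variance of the temporal differences;
    # numpy's eigh returns eigenvalues in ascending order, so the first
    # columns are the slowest features.
    dval, dvec = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ dvec[:, :n_features]
```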

    On the analysis and interpretation of inhomogeneous quadratic forms as receptive fields

    In this paper we introduce some mathematical and numerical tools to analyze and interpret inhomogeneous quadratic forms. The resulting characterization is in some aspects similar to that given by experimental studies of cortical cells, making it particularly suitable for application to second-order approximations and theoretical models of physiological receptive fields. We first discuss two ways of analyzing a quadratic form: by directly visualizing the coefficients of its quadratic and linear terms, and by considering the eigenvectors of its quadratic term. We then present an algorithm to compute the optimal excitatory and inhibitory stimuli, i.e. the stimuli that maximize and minimize the quadratic form, respectively, given a fixed energy constraint. The analysis of the optimal stimuli is completed by considering their invariances, i.e. the transformations to which the quadratic form is most insensitive, and we introduce a test to determine which of these are statistically significant. Next we propose a way to measure the relative contribution of the quadratic and linear terms to the total output of the quadratic form. Furthermore, we derive simpler versions of the above techniques in the special case of a quadratic form without a linear term, and discuss the analysis of such functions in previous theoretical and experimental studies. In the final part of the paper we show that for each quadratic form it is possible to build an equivalent two-layer neural network, which is compatible with (but more general than) related networks used in some recent papers and with the energy model of complex cells. We show that the neural network is unique only up to an arbitrary orthogonal transformation of the excitatory and inhibitory subunits in the first layer.
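    In the special case without a linear term, the optimal stimuli under the energy constraint reduce to scaled eigenvectors of the quadratic term; a short sketch of this case (illustrative names, not the paper's code):

```python
# For g(x) = x.T @ H @ x under ||x|| = r, the optimal excitatory stimulus is
# the eigenvector of H with the largest eigenvalue (scaled to norm r), and the
# optimal inhibitory stimulus the one with the smallest eigenvalue.
import numpy as np

def optimal_stimuli(H, r=1.0):
    H = 0.5 * (H + H.T)                 # symmetrise the quadratic term
    eigval, eigvec = np.linalg.eigh(H)  # eigenvalues in ascending order
    x_inh = r * eigvec[:, 0]            # minimises g; value r**2 * eigval[0]
    x_exc = r * eigvec[:, -1]           # maximises g; value r**2 * eigval[-1]
    return x_exc, x_inh
```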

    The Army of One (Sample): the Characteristics of Sampling-based Probabilistic Neural Representations

    There is growing evidence that humans and animals represent the uncertainty associated with sensory stimuli and utilize this uncertainty during planning and decision making in a statistically optimal way. Recently, a nonparametric framework for representing probabilistic information has been proposed whereby neural activity encodes samples from the distribution over external variables. Although such sample-based probabilistic representations have strong empirical and theoretical support, two major issues need to be clarified before they can be considered as viable candidate theories of cortical computation. First, in a fluctuating natural environment, can neural dynamics provide sufficient samples to accurately estimate a stimulus? Second, can such a code support accurate learning over biologically plausible time-scales? Although it is well known that sampling is statistically optimal if the number of samples is unlimited, biological constraints mean that estimation and learning in the cortex must be supported by a relatively small number of possibly dependent samples. We explored these issues in a cue combination task by comparing a neural circuit that employed a sampling-based representation to an optimal estimator. For static stimuli, we found that a single sample is sufficient to obtain an estimator with less than twice the optimal variance, and that performance improves with the inverse square root of the number of samples. For dynamic stimuli with linear-Gaussian evolution, we found that the efficiency of the estimation improves significantly, both because temporal information stabilizes the estimate and because sampling does not require a burn-in phase. Finally, we found that using a single sample, the dynamic model can accurately learn the parameters of the input neural populations up to a general scaling factor, which disappears for modest sample sizes. These results suggest that sample-based representations can support estimation and learning using a relatively small number of samples and are therefore highly feasible alternatives for performing probabilistic cortical computations.
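    The static-stimulus result can be illustrated with a short Monte Carlo sketch for Gaussian cue combination (a toy setting with made-up parameters, not the paper's neural circuit): taking a single posterior sample as the estimate doubles the optimal variance, and averaging n samples gives variance close to the optimum times (1 + 1/n).

```python
# Toy check: MSE of a sample-based estimator vs. the optimal (posterior-mean)
# estimator in Gaussian cue combination with a flat prior.
import numpy as np

rng = np.random.default_rng(0)
s = 0.0                                      # true stimulus
var1, var2 = 1.0, 2.0                        # cue noise variances
post_var = 1.0 / (1.0/var1 + 1.0/var2)       # posterior variance

trials, n_samples = 100_000, 5
c1 = s + rng.normal(0.0, np.sqrt(var1), trials)    # noisy cue 1
c2 = s + rng.normal(0.0, np.sqrt(var2), trials)    # noisy cue 2
post_mean = post_var * (c1/var1 + c2/var2)         # optimal combined estimate

# Sample-based estimate: average n_samples draws from the posterior.
draws = post_mean[:, None] + rng.normal(0.0, np.sqrt(post_var),
                                        (trials, n_samples))
est = draws.mean(axis=1)

print("optimal MSE:     ", post_var)
print("sample-based MSE:", np.mean((est - s)**2))  # ~ post_var * (1 + 1/n)
```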

    Learning complex tasks with probabilistic population codes

    Recent psychophysical experiments imply that the brain employs a neural representation of the uncertainty in sensory stimuli and that probabilistic computations are supported by the cortex. Several candidate neural codes for uncertainty have been posited including Probabilistic Population Codes (PPCs). PPCs support various versions of probabilistic inference and marginalisation in a neurally plausible manner. However, in order to establish whether PPCs can be of general use, three important limitations must be addressed. First, it is critical that PPCs support learning. For example, during cue combination, subjects are able to learn the uncertainties associated with the sensory cues as well as the prior distribution over the stimulus. However, previous modelling work with PPCs requires these parameters to be carefully set by hand. Second, PPCs must be able to support inference in non-linear models. Previous work has focused on linear models and it is not clear whether non-linear models can be implemented in a neurally plausible manner. Third, PPCs must be shown to scale to high-dimensional problems with many variables. This contribution addresses these three limitations of PPCs by establishing a connection with variational Expectation Maximisation (vEM). In particular, we show that the usual PPC update for cue combination can be interpreted as the E-Step of a vEM algorithm. The corresponding M-Step then automatically provides a method for learning the parameters of the model by adapting the connection strengths in the PPC network in an unsupervised manner. Using a version of sparse coding as an example, we show that the vEM interpretation of PPC can be extended to non-linear and multi-dimensional models and we show how the approach scales with the dimensionality of the problem. Our results provide a rigorous assessment of the ability of PPCs to capture the probabilistic computations performed in the cortex.
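    The cue-combination update that the paper reinterprets as an E-step can be sketched for a linear PPC with independent Poisson neurons and Gaussian tuning curves, where combining cues amounts to summing population activities (all parameter values below are illustrative):

```python
# With Poisson spike counts and translated Gaussian tuning curves, the decoded
# posterior is Gaussian, and adding the two population activities multiplies
# the single-cue posteriors (precisions add).
import numpy as np

prefs = np.linspace(-10.0, 10.0, 101)   # preferred stimuli of the population
sigma_tc = 2.0                          # tuning-curve width

def tuning(s, gain):
    return gain * np.exp(-(s - prefs)**2 / (2 * sigma_tc**2))

def decode(r):
    """Posterior mean and variance implied by spike counts r (flat prior)."""
    return (r @ prefs) / r.sum(), sigma_tc**2 / r.sum()

rng = np.random.default_rng(1)
r1 = rng.poisson(tuning(1.0, gain=5.0))    # cue 1: low gain, high uncertainty
r2 = rng.poisson(tuning(2.0, gain=20.0))   # cue 2: high gain, low uncertainty
m1, v1 = decode(r1)
m2, v2 = decode(r2)
m3, v3 = decode(r1 + r2)                   # PPC combination: add activities

print(v3, 1.0 / (1.0/v1 + 1.0/v2))         # precisions add, as in a product
print(m3, v3 * (m1/v1 + m2/v2))            # precision-weighted mean
```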

    Modular Toolkit for Data Processing (MDP): A Python Data Processing Framework

    Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework that can easily be expanded. The implementation of new algorithms is easy and intuitive, and newly implemented units are automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units also make it a useful educational tool.
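    A short usage sketch in the spirit of the MDP documentation (node and Flow names as in MDP 2.x/3.x; the exact signatures may differ between versions):

```python
# Build a feed-forward processing sequence (a Flow) from two nodes, train it,
# and apply it to data. The random data here is only for illustration.
import numpy as np
import mdp

x = np.random.random((1000, 20))        # 1000 observations, 20 variables

flow = mdp.Flow([mdp.nodes.PCANode(output_dim=10),
                 mdp.nodes.SFANode(output_dim=3)])
flow.train(x)          # each node is trained in turn on the processed data
y = flow.execute(x)    # apply the whole trained sequence; shape (1000, 3)
```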

    A Structured Model of Video Reproduces Primary Visual Cortical Organisation

    The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1). In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely paralleled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.
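    A toy generative sketch of the model class described, with binary identity variables gating continuous attribute variables (shapes and priors here are illustrative; the paper's model and its inference are considerably richer):

```python
# Image = sum over present features of their attribute-weighted appearance.
# The posterior over z is then phase-invariant (complex-cell-like), while the
# posterior over a tracks the momentary appearance (simple-cell-like).
import numpy as np

rng = np.random.default_rng(2)
n_features, n_pix = 8, 64
basis = rng.normal(size=(n_features, 2, n_pix))  # e.g. quadrature Gabor pairs

z = rng.random(n_features) < 0.2        # identity: is feature k present?
a = rng.normal(size=(n_features, 2))    # attributes: appearance of feature k

image = np.einsum('k,kc,kcp->p', z.astype(float), a, basis)
```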

    Temporal slowness as an unsupervised learning principle

    In this thesis we investigate the relevance of temporal slowness as a principle for the self-organization of the visual cortex and for technical applications. We first introduce and discuss this principle and put it into mathematical terms. We then define the slow feature analysis (SFA) algorithm, which solves the mathematical problem for multidimensional, discrete time series in a finite-dimensional function space. In the main part of the thesis we apply temporal slowness as a learning principle for receptive fields in the visual cortex. Using SFA we learn the input-output functions that, when applied to natural image sequences, vary as slowly as possible in time and thus optimize the slowness objective. The resulting functions can be interpreted as nonlinear spatio-temporal receptive fields and compared to neurons in the primary visual cortex (V1). We find that they reproduce, qualitatively and quantitatively, many of the properties of complex cells in V1: not only the two basic ones, namely a Gabor-like optimal stimulus and phase-shift invariance, but also secondary ones like direction selectivity, non-orthogonal inhibition, end-inhibition and side-inhibition. These results show that a single unsupervised learning principle can account for a rich repertoire of receptive field properties. In order to analyze the nonlinear functions learned by SFA in our model, we developed a set of mathematical and numerical tools to characterize quadratic forms as receptive fields, and we extend these tools in a subsequent chapter to make them of more general interest for theoretical and physiological models. We conclude the thesis with an application of the temporal slowness principle to pattern recognition. We reformulate the SFA algorithm so that it can be applied to pattern recognition problems that lack a temporal structure and present the optimal solutions in this case. We then apply the system to a standard handwritten-digit database with good performance.
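    A sketch of the kind of reformulation described: without a temporal order, the difference covariance used by SFA can be built from pairs of patterns belonging to the same class (the pairing scheme below is an illustrative choice; the thesis gives the exact construction):

```python
# "Slow" features become class-invariant features: directions with small
# within-class pair differences relative to the total variance.
import numpy as np
from scipy.linalg import eigh

def sfa_for_classification(x, labels, n_features=2):
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    diffs = []
    for c in np.unique(labels):
        xc = x[labels == c]
        # All within-class pairs play the role of temporal neighbours
        # (O(n^2) pairs per class; subsample for large classes).
        diffs.append((xc[:, None, :] - xc[None, :, :]).reshape(-1, x.shape[1]))
    dcov = np.cov(np.vstack(diffs), rowvar=False)
    # Generalised eigenproblem dcov v = lambda cov v; ascending eigenvalues,
    # so the first columns give the most class-invariant projections.
    eigval, eigvec = eigh(dcov, cov)
    return x @ eigvec[:, :n_features]
```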